The sixth assessment of the Intergovernmental Panel on Climate Change (IPCC) states that "cumulative net CO2 emissions over the last decade (2010-2019) are about the same size as the remaining carbon budget likely to limit warming to 1.5C (medium confidence)." Such reports directly feed the public discourse, but nuances such as beliefs and degrees of confidence are often lost. In this paper, we propose a formal account that allows arguments in abstract argumentation settings to be labelled with such beliefs and associated degrees of confidence. Differently from other proposals in probabilistic argumentation, we focus on the task of probabilistic inference built upon Sato's distribution semantics, which has been shown to cover a variety of cases including the semantics of Bayesian networks. Borrowing from the vast literature on such semantics, we examine how these tasks can be handled in practice when uncertain probabilities are considered, and discuss connections with existing proposals for probabilistic argumentation.
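Under Sato's distribution semantics, a program contains independent probabilistic facts, and the probability of a query is the total mass of the possible worlds that entail it. As a minimal sketch of exact inference by world enumeration (this is illustrative, not the labelling framework proposed in the abstract; the fact names and probabilities are invented):

```python
from itertools import product


def query_probability(prob_facts, entails):
    """P(query) under the distribution semantics by enumerating worlds.

    prob_facts: dict mapping each fact to its (independent) probability.
    entails: function taking a world (frozenset of true facts) and
    returning True if the query holds in that world.
    """
    facts = list(prob_facts)
    total = 0.0
    for values in product([True, False], repeat=len(facts)):
        world = frozenset(f for f, v in zip(facts, values) if v)
        # Probability of this world: product over independent facts.
        p = 1.0
        for f, v in zip(facts, values):
            p *= prob_facts[f] if v else 1.0 - prob_facts[f]
        if entails(world):
            total += p
    return total


# Toy program: alarm :- burglary.  alarm :- earthquake.
facts = {"burglary": 0.1, "earthquake": 0.2}
p_alarm = query_probability(
    facts, lambda w: "burglary" in w or "earthquake" in w
)
# p_alarm = 1 - (1 - 0.1) * (1 - 0.2) = 0.28
```

Enumeration is exponential in the number of facts; practical systems use knowledge compilation instead, but the semantics being computed is the same.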
Many software systems, such as online social networks, enable users to share information about themselves. Although the action of sharing is simple, it requires an elaborate thought process about privacy: what to share, with whom, and for what purpose. Thinking through each of these for every piece of content is tedious. Recent approaches to this problem build personal assistants that can learn over time what is private and recommend privacy labels, such as private or public, for individual content that a user considers sharing. However, privacy is inherently ambiguous and highly personal. Existing approaches to recommending privacy decisions do not adequately address these aspects of privacy. Ideally, a personal assistant should be able to adjust its recommendations to a given user, taking that user's understanding of privacy into account. Moreover, the personal assistant should be able to assess when its recommendations are uncertain and let the user make the decision on her own. Accordingly, this paper proposes a personal assistant that uses evidential deep learning to classify content based on its privacy label. An important characteristic of the personal assistant is that it can explicitly model the uncertainty in its decisions, determine that it does not know the answer, and delegate the recommendation when its uncertainty is high. By factoring in the user's own understanding of privacy, such as risk factors or their own labels, the personal assistant can personalize its recommendations for each user. We evaluate our proposed personal assistant using well-known datasets. Our results show that our personal assistant can accurately identify uncertain cases, personalize recommendations to its user's needs, and thus help users preserve their privacy well.
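In evidential deep learning, a network outputs non-negative per-class evidence that parameterizes a Dirichlet distribution, from which both beliefs and an explicit uncertainty mass can be read off. A minimal sketch of the delegation mechanism described above, assuming a binary private/public labelling (the threshold and evidence values are illustrative, not the paper's):

```python
import numpy as np


def evidential_decision(evidence, threshold=0.5):
    """Map per-class evidence to beliefs and an uncertainty mass.

    Under subjective logic, evidence e_k for K classes gives Dirichlet
    parameters alpha_k = e_k + 1, beliefs b_k = e_k / S, and uncertainty
    u = K / S, where S = sum(alpha). When u exceeds the threshold, the
    assistant delegates the decision back to the user.
    """
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.size
    alpha = evidence + 1.0
    S = alpha.sum()
    beliefs = evidence / S
    uncertainty = K / S
    if uncertainty > threshold:
        return "delegate", beliefs, uncertainty
    label = ["private", "public"][int(np.argmax(beliefs))]
    return label, beliefs, uncertainty


# Strong evidence for "private": low uncertainty, confident recommendation.
decision, b, u = evidential_decision([18.0, 2.0])
# Little evidence either way: high uncertainty, the assistant delegates.
decision2, b2, u2 = evidential_decision([0.5, 0.5])
```

The key design point is that uncertainty is a first-class output rather than being inferred from softmax confidence, which is known to be overconfident on unfamiliar inputs.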
In inverse reinforcement learning (IRL), a learning agent infers a reward function encoding the underlying task using demonstrations from experts. However, many existing IRL techniques make the often unrealistic assumption that the agent has access to full information about the environment. We remove this assumption by developing an algorithm for IRL in partially observable Markov decision processes (POMDPs). We address two limitations of existing IRL techniques. First, they require an excessive amount of data due to the information asymmetry between the expert and the learner. Second, most of these IRL techniques require solving the computationally intractable forward problem -- computing an optimal policy given a reward function -- in POMDPs. The developed algorithm reduces the information asymmetry while increasing the data efficiency by incorporating task specifications expressed in temporal logic into IRL. Such specifications may be interpreted as side information available to the learner a priori in addition to the demonstrations. Further, the algorithm avoids a common source of algorithmic complexity by building on causal entropy as the measure of the likelihood of the demonstrations as opposed to entropy. Nevertheless, the resulting problem is nonconvex due to the so-called forward problem. We solve the intrinsic nonconvexity of the forward problem in a scalable manner through a sequential linear programming scheme that guarantees to converge to a locally optimal policy. In a series of examples, including experiments in a high-fidelity Unity simulator, we demonstrate that even with a limited amount of data and POMDPs with tens of thousands of states, our algorithm learns reward functions and policies that satisfy the task while inducing similar behavior to the expert by leveraging the provided side information.
Data-driven soft sensors are extensively used in industrial and chemical processes to predict hard-to-measure process variables whose real value is difficult to track during routine operations. The regression models used by these sensors often require a large number of labeled examples, yet obtaining the label information can be very expensive given the high time and cost required by quality inspections. In this context, active learning methods can be highly beneficial as they can suggest the most informative labels to query. However, most of the active learning strategies proposed for regression focus on the offline setting. In this work, we adapt some of these approaches to the stream-based scenario and show how they can be used to select the most informative data points. We also demonstrate how to use a semi-supervised architecture based on orthogonal autoencoders to learn salient features in a lower dimensional space. The Tennessee Eastman Process is used to compare the predictive performance of the proposed approaches.
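One plausible stream-based selection strategy of the kind the abstract describes is query-by-committee: a small committee of regressors votes on each incoming point, and a label is requested only when the committee disagrees strongly. A toy sketch under that assumption (the linear committee, threshold, and synthetic data are illustrative, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)


def fit_committee(X, y, n_members=5):
    """Fit a small committee of linear models on bootstrap resamples."""
    n = len(X)
    Xb = np.column_stack([X, np.ones(n)])  # add a bias column
    committee = []
    for _ in range(n_members):
        idx = rng.integers(0, n, size=n)
        w, *_ = np.linalg.lstsq(Xb[idx], y[idx], rcond=None)
        committee.append(w)
    return committee


def should_query(x, committee, threshold):
    """Query the expensive label when committee disagreement is high."""
    xb = np.append(x, 1.0)
    preds = np.array([xb @ w for w in committee])
    return bool(preds.var() > threshold)


# Labeled seed data from a noisy linear process (stand-in for plant data).
X = rng.uniform(-1, 1, size=(30, 2))
y = X @ np.array([2.0, -1.0]) + rng.normal(0.0, 0.1, size=30)
committee = fit_committee(X, y)

# Two points arriving on the stream: one inside, one far outside
# the region covered by the labeled data.
near = np.array([0.1, 0.2])
far = np.array([8.0, -9.0])
```

Points far from the labeled region amplify the small differences between the bootstrap models, so their prediction variance is high and they get queried, while well-covered points are skipped.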
This study uses multisensory data (i.e., color and depth) to recognize human actions in the context of multimodal human-robot interaction. Here we employed the iCub robot to observe the predefined actions of the human partners by using four different tools on 20 objects. We show that the proposed multimodal ensemble learning leverages complementary characteristics of three color cameras and one depth sensor that improves, in most cases, recognition accuracy compared to the models trained with a single modality. The results indicate that the proposed models can be deployed on the iCub robot that requires multimodal action recognition, including social tasks such as partner-specific adaptation, and contextual behavior understanding, to mention a few.
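A common way to realize such a multimodal ensemble is late fusion: each modality-specific model produces a class distribution, and the distributions are averaged before taking the argmax. A minimal sketch, assuming equal weights and a two-class toy example (the numbers are invented, not results from the iCub experiments):

```python
import numpy as np


def late_fusion(modality_probs, weights=None):
    """Weighted late fusion of per-modality class probabilities.

    modality_probs: array-like of shape (n_modalities, n_classes), each
    row one model's softmax output. Returns the fused distribution and
    the fused class index.
    """
    P = np.asarray(modality_probs, dtype=float)
    if weights is None:
        weights = np.full(len(P), 1.0 / len(P))
    fused = np.average(P, axis=0, weights=weights)
    return fused, int(np.argmax(fused))


# Three color cameras lean toward class 0; the depth model disagrees.
probs = [[0.60, 0.40], [0.70, 0.30], [0.55, 0.45], [0.20, 0.80]]
fused, label = late_fusion(probs)
```

Fusion lets the agreeing color views outvote the single dissenting depth model, which is one way complementary modalities can improve accuracy over any single modality.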
The lack of standardization is a prominent issue in magnetic resonance (MR) imaging. This often causes undesired contrast variations due to differences in hardware and acquisition parameters. In recent years, MR harmonization using image synthesis with disentanglement has been proposed to compensate for the undesired contrast variations. Despite the success of existing methods, we argue that three major improvements can be made. First, most existing methods are built upon the assumption that multi-contrast MR images of the same subject share the same anatomy. This assumption is questionable since different MR contrasts are specialized to highlight different anatomical features. Second, these methods often require a fixed set of MR contrasts for training (e.g., both T1-weighted and T2-weighted images must be available), which limits their applicability. Third, existing methods are generally sensitive to imaging artifacts. In this paper, we present a novel approach, Harmonization with Attention-based Contrast, Anatomy, and Artifact Awareness (HACA3), to address these three issues. We first propose an anatomy fusion module that enables HACA3 to respect the anatomical differences between MR contrasts. HACA3 is also robust to imaging artifacts and can be trained and applied to any set of MR contrasts. Experiments show that HACA3 achieves state-of-the-art performance under multiple image quality metrics. We also demonstrate the applicability of HACA3 on downstream tasks with diverse MR datasets acquired from 21 sites with different field strengths, scanner platforms, and acquisition protocols.
A new development in NLP is the construction of hyperbolic word embeddings. As opposed to their Euclidean counterparts, hyperbolic embeddings are represented not by vectors, but by points in hyperbolic space. This makes the most common basic scheme for constructing document representations, namely the averaging of word vectors, meaningless in the hyperbolic setting. We reinterpret the vector mean as the centroid of the points represented by the vectors, and investigate various hyperbolic centroid schemes and their effectiveness at text classification.
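One standard hyperbolic centroid of the kind investigated here is the Einstein midpoint in the Klein model: a Lorentz-factor-weighted average that always stays inside the unit ball. A minimal sketch (this is one candidate scheme, not necessarily the one the paper finds most effective):

```python
import numpy as np


def einstein_midpoint(points):
    """Einstein midpoint of points in the Klein model of hyperbolic space.

    Each point v must satisfy ||v|| < 1. The midpoint is
        m = (sum_i gamma_i v_i) / (sum_i gamma_i),
    with Lorentz factor gamma_i = 1 / sqrt(1 - ||v_i||^2), and it lies
    inside the unit ball.
    """
    points = np.asarray(points, dtype=float)
    sq_norms = np.sum(points ** 2, axis=1)
    gammas = 1.0 / np.sqrt(1.0 - sq_norms)
    return (gammas[:, None] * points).sum(axis=0) / gammas.sum()


pts = np.array([[0.0, 0.0], [0.9, 0.0]])
m = einstein_midpoint(pts)
```

Unlike the Euclidean mean (here 0.45 on the first axis), the Einstein midpoint is pulled toward points near the boundary, because their Lorentz factors, and hence their weights, grow without bound.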
Facial recognition is fundamental for a wide variety of security systems operating in real-time applications. In video-surveillance-based face recognition, face images are typically captured over multiple frames in uncontrolled conditions, where head pose, illumination, shadowing, motion blur, and focus change over the sequence. Face recognition tasks generally involve three fundamental operations: face detection, face alignment, and face recognition. This study presents comparative benchmark tables for state-of-the-art face recognition methods by testing them with the same backbone architecture, in order to focus on the face recognition solution itself rather than the network architecture. For this purpose, we constructed a video surveillance dataset of face IDs with high age variance and intra-class variance (face make-up, beard, etc.), using native surveillance facial imagery for evaluation. In addition, this work identifies the best recognition methods for different conditions, such as non-masked faces, masked faces, and faces with glasses.
The global Information and Communications Technology (ICT) supply chain is a complex network consisting of all types of participants. It is often formulated as a Social Network to discuss the supply chain network's relations, properties, and development in supply chain management. Information sharing plays a crucial role in improving the efficiency of the supply chain, and datasheets are the most common data format used to describe e-component commodities in the ICT supply chain because of their human readability. However, the surging number of electronic documents has gone far beyond the capacity of human readers, and processing tabular data automatically is also challenging because of complex table structures and heterogeneous layouts. Table Structure Recognition (TSR) aims to represent tables with complex structures in a machine-interpretable format so that the tabular data can be processed automatically. In this paper, we formulate TSR as an object detection problem and propose to generate an intuitive representation of a complex table structure to enable structuring of the tabular data related to the commodities. To cope with border-less and small layouts, we propose a cost-sensitive loss function that considers the detection difficulty of each class. Besides, we propose a novel anchor generation method using the property of tables that columns in a table should share an identical height, and rows in a table should share the same width. We implement our proposed method based on Faster-RCNN, achieving 94.79% mean Average Precision (AP) and consistently improving AP by more than 1.5% over different benchmark models.
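The anchor-generation idea above can be sketched directly: column anchors all span the full table height and differ only in width, while row anchors all span the full table width and differ only in height. A minimal sketch, with invented sizes and `(x1, y1, x2, y2)` boxes (the actual method is integrated into Faster-RCNN's region proposal stage):

```python
def generate_table_anchors(table_w, table_h, col_widths, row_heights):
    """Generate detection anchors exploiting table regularity.

    Column anchors share the table's full height and vary only in width;
    row anchors share the table's full width and vary only in height.
    Each anchor is an (x1, y1, x2, y2) box centred on the table.
    """
    cx, cy = table_w / 2.0, table_h / 2.0
    col_anchors = [(cx - w / 2.0, 0.0, cx + w / 2.0, float(table_h))
                   for w in col_widths]
    row_anchors = [(0.0, cy - h / 2.0, float(table_w), cy + h / 2.0)
                   for h in row_heights]
    return col_anchors, row_anchors


cols, rows = generate_table_anchors(800, 600,
                                    col_widths=[40, 80, 160],
                                    row_heights=[20, 40])
```

Constraining one dimension per class shrinks the anchor search space compared with generic multi-scale, multi-aspect-ratio anchors, which is what makes the table-specific prior useful.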
The growth of cellular networks (LTE, 5G, and beyond) has been dramatic, driven by high consumer demand, and they are more promising than other wireless networks equipped with advanced telecommunication technologies. The main goals of these networks are to connect billions of devices, systems, and users with high-speed data transmission, high battery capacity, and low latency, and to support a wide range of new applications such as virtual reality, the metaverse, telehealth, online education, autonomous vehicles, advanced manufacturing, and many more. To achieve these goals, new approaches to spectrum management employ artificial intelligence (AI) methods. This paper presents a vulnerability analysis of spectrum sensing approaches that use AI-based semantic segmentation models to identify cellular network signals under adversarial attacks, with and without a defensive distillation mitigation method. The results show that the mitigation method can significantly reduce the vulnerabilities of AI-based spectrum sensing models against adversarial attacks.
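The core mechanism of defensive distillation is a temperature-scaled softmax: a teacher trained at high temperature produces smoothed soft labels on which the student is trained, flattening the loss surface that gradient-based attacks exploit. A minimal sketch of just the soft-label computation (the logits and temperature are illustrative, not values from the paper's experiments):

```python
import numpy as np


def softmax_T(logits, T):
    """Temperature-scaled softmax used in defensive distillation.

    At T = 1 this is the ordinary softmax; at high T the output
    distribution is smoothed, yielding soft labels for the student.
    """
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()


logits = np.array([8.0, 2.0, 1.0])
hard = softmax_T(logits, T=1.0)   # near one-hot teacher output
soft = softmax_T(logits, T=20.0)  # smoothed soft labels for the student
```

Training the student on `soft` rather than `hard` targets reduces the magnitude of the model's output gradients with respect to its inputs, which is what makes crafting adversarial perturbations harder.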